Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes
of computer vision. These advances, combined with the growing popularity of
Micro Aerial Vehicles (MAVs) as an autonomous imaging platform, have made 3D
vision tools ubiquitous for a large number of Architecture, Engineering and
Construction applications, among audiences mostly unskilled in computer vision.
However, obtaining high-resolution and accurate reconstructions of a
large-scale object using SfM places many critical constraints on the quality of
the image data, which often become sources of inaccuracy because current 3D
reconstruction pipelines do not help users assess the fidelity of the input
data during image acquisition. In this paper, we present and advocate a
closed-loop interactive approach that performs incremental reconstruction in
real time and gives users online feedback about quality parameters such as
Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against state-of-the-art methods.
Comment: 8 pages, 2015 IEEE International Conference on Robotics and
Automation (ICRA '15), Seattle, WA, US
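The Ground Sampling Distance mentioned above has a standard closed form for a nadir-facing camera: the ground footprint of one pixel follows from similar triangles through the lens. The sketch below is illustrative only (the function name and example sensor parameters are assumptions, not from the paper):

```python
def ground_sampling_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Ground footprint of one pixel (metres/pixel) for a nadir-facing camera."""
    pixel_size_mm = sensor_width_mm / image_width_px      # physical pixel pitch
    return altitude_m * pixel_size_mm / focal_length_mm   # similar triangles

# Example: a 24 mm lens on a 13.2 mm-wide, 5472 px sensor flown at 30 m
gsd = ground_sampling_distance(30.0, 24.0, 13.2, 5472)
print(f"{gsd * 100:.2f} cm/px")  # 0.30 cm/px
```

Online feedback like the paper describes amounts to evaluating this per camera pose and painting the result onto the incremental surface mesh.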
Flexible and User-Centric Camera Calibration using Planar Fiducial Markers
The benefit of accurate camera calibration for recovering 3D structure from images is a well-studied topic. Recently, 3D vision tools for end-user applications have become popular among large audiences, mostly unskilled in computer vision. This motivates the need for a flexible and user-centric camera calibration method that drastically relaxes the critical requirements on the calibration target and ensures that low-quality or faulty images provided by end users do not degrade the overall calibration and, in effect, the resulting 3D model. In this paper we present and advocate an approach to camera calibration using fiducial markers, aiming at the accuracy of target-based calibration techniques without the requirement for a precise calibration pattern, to ease the calibration effort for the end user. An extensive set of experiments with real images is presented which demonstrates improvements in the estimation of the parameters of the camera model as well as accuracy in the multi-view stereo reconstruction of large-scale scenes. Pixel re-projection errors and ground-truth errors obtained by our method are significantly lower than those of popular calibration routines, even though paper-printable and easy-to-use targets are employed.
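The pixel re-projection error used as a quality measure above is the RMS pixel distance between detected marker corners and their projections under the estimated camera model. A minimal sketch with an ideal pinhole model (no distortion; the function names and synthetic points are assumptions for illustration):

```python
import math

def project(point3d, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates through a pinhole model."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def rms_reprojection_error(points3d, observed2d, fx, fy, cx, cy):
    """Root-mean-square pixel distance between projections and detections."""
    sq = 0.0
    for p3, (u, v) in zip(points3d, observed2d):
        pu, pv = project(p3, fx, fy, cx, cy)
        sq += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(sq / len(points3d))

# Synthetic check: perfectly detected corners reproject with zero error
pts = [(0.1, -0.05, 2.0), (-0.2, 0.1, 2.5)]
obs = [project(p, 800, 800, 320, 240) for p in pts]
print(rms_reprojection_error(pts, obs, 800, 800, 320, 240))  # 0.0
```

In a real calibration pipeline the same residual is minimized over the intrinsics (and distortion coefficients) rather than merely reported.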
The Mixed-Observable Constrained Linear Quadratic Regulator Problem: the Exact Solution and Practical Algorithms
This paper studies the problem of steering a linear time-invariant system
subject to state and input constraints towards a goal location that may be
inferred only through partial observations. We assume mixed-observable
settings, where the system's state is fully observable and the environment's
state defining the goal location is only partially observed. In these settings,
the planning problem is an infinite-dimensional optimization problem where the
objective is to minimize the expected cost. We show how to reformulate the
control problem as a finite-dimensional deterministic problem by optimizing
over a trajectory tree. Leveraging this result, we demonstrate that when the
environment is static, the observation model is piecewise, and the cost
function is convex, the original control problem can be reformulated as a
Mixed-Integer Convex Program (MICP) that can be solved to global optimality
using a branch-and-bound algorithm. The effectiveness of the proposed approach
is demonstrated on navigation tasks, where the system has to reach a goal
location identified from partial observations.
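The trajectory-tree idea can be made concrete on a toy instance: inputs before an observation are shared by all branches, inputs after it may depend on which observation arrived, and the objective weights each leaf by its joint probability. Everything below (the scalar dynamics, goals, priors, and discretized input set) is an assumed toy example, not the paper's formulation:

```python
from itertools import product

# Hypothetical toy instance: scalar system x+ = x + u, two candidate goals,
# one binary observation o of the environment state e received after step 1.
GOALS = (-1.0, 1.0)               # goal position for environment state e = 0 or 1
PRIOR = (0.4, 0.6)                # prior belief over e
ACC = 0.8                         # P(o = e): observation accuracy
U = [-1.0, -0.5, 0.0, 0.5, 1.0]   # discretized inputs (brute-force stand-in for MICP)

def branch_prob(o):
    """Joint probabilities P(e, o) for each environment state."""
    return [PRIOR[e] * (ACC if o == e else 1 - ACC) for e in (0, 1)]

def tree_cost(u0, u1_given_o):
    """Expected cost of a depth-2 trajectory tree sharing u0 before the branch."""
    cost = 0.0
    for o in (0, 1):
        for e, p in enumerate(branch_prob(o)):
            x = 0.0 + u0 + u1_given_o[o]                 # state at the leaf
            cost += p * (u0**2 + u1_given_o[o]**2 + 5.0 * (x - GOALS[e])**2)
    return cost

best = min(((u0, (ua, ub)) for u0, ua, ub in product(U, U, U)),
           key=lambda t: tree_cost(*t))
print("best tree:", best, "expected cost:", round(tree_cost(*best), 3))
```

The key structural point survives even in this sketch: the optimizer hedges with the shared input, then commits differently on each observation branch, which is exactly what the finite-dimensional deterministic reformulation optimizes over.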
Vision and Learning for Deliberative Monocular Cluttered Flight
Cameras provide a rich source of information while being passive, cheap and
lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work
we present the first implementation of receding horizon control, which is
widely used in ground vehicles, with monocular vision as the only sensing mode
for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a
number of contributions: a novel coupling of perception and control via
multiple relevant and diverse interpretations of the scene around the robot,
leveraging recent advances in machine learning to showcase anytime budgeted
cost-sensitive feature selection, and fast non-linear regression for monocular
depth prediction. We empirically demonstrate the efficacy of our novel pipeline
via real-world experiments of more than 2 km through dense trees with a
quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to
also combine information from other modalities such as stereo and lidar, if
available.
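The receding-horizon loop described above can be sketched in a few lines: score a small library of candidate actions against a predicted obstacle cost, execute only the first action, then replan from the new state. The cost model below is a made-up stand-in for the paper's learned monocular depth prediction, and the heading-change library is an assumption:

```python
# Minimal receding-horizon sketch (illustrative, not the paper's code).
LIBRARY = [-0.4, -0.2, 0.0, 0.2, 0.4]   # candidate heading changes (rad)

def predicted_cost(heading, heading_change):
    """Stand-in for learned depth prediction: penalize sharp turns and a
    fixed obstacle corridor centred on heading 0.3 rad."""
    new_heading = heading + heading_change
    obstacle = max(0.0, 1.0 - abs(new_heading - 0.3) / 0.2)  # fake clutter
    return obstacle + 0.1 * heading_change**2

def step(heading):
    """One receding-horizon cycle: plan over the library, apply best action."""
    best = min(LIBRARY, key=lambda dh: predicted_cost(heading, dh))
    return heading + best

heading = 0.3   # initially pointed straight at the fake obstacle
for _ in range(3):
    heading = step(heading)
print(round(heading, 2))  # 0.1: steered off the obstacle, then held course
```

Replanning every cycle is what lets a cheap, myopic cost model recover from individual bad depth predictions, which is the argument for pairing receding horizon control with fast regression.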
Machine Learning Based Path Planning for Improved Rover Navigation (Pre-Print Version)
Enhanced AutoNav (ENav), the baseline surface navigation software for NASA's Perseverance rover, sorts a list of candidate paths for the rover to traverse, then uses the Approximate Clearance Evaluation (ACE) algorithm to evaluate whether the most highly ranked paths are safe. ACE is crucial for maintaining the safety of the rover, but is computationally expensive. If the most promising candidates in the list of paths are all found to be infeasible, ENav must continue to search the list and run time-consuming ACE evaluations until a feasible path is found. In this paper, we present two heuristics that, given a terrain heightmap around the rover, produce cost estimates that more effectively rank the candidate paths before ACE evaluation. The first heuristic uses Sobel operators and convolution to incorporate the cost of traversing high-gradient terrain. The second heuristic uses a machine learning (ML) model to predict areas that will be deemed untraversable by ACE. We used physics simulations to collect training data for the ML model and to run Monte Carlo trials to quantify navigation performance across a variety of terrains with various slopes and rock distributions. Compared to ENav's baseline performance, integrating the heuristics can lead to a significant reduction in ACE evaluations and average computation time per planning cycle, increase path efficiency, and maintain or improve the rate of successful traverses. This strategy of targeting specific bottlenecks with ML while maintaining the original ACE safety checks provides an example of how ML can be infused into planetary science missions and other safety-critical software.
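The first heuristic is simple enough to sketch directly: convolve the heightmap with the standard 3x3 Sobel kernels and rank candidate paths by the mean gradient magnitude of the cells they cross. The heightmap, paths, and cost aggregation below are illustrative assumptions, not ENav's actual implementation:

```python
# Sketch of the gradient heuristic (illustrative, not the flight code).
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(hmap, r, c):
    """|grad h| at an interior cell via 3x3 Sobel convolution."""
    gx = gy = 0.0
    for i in range(3):
        for j in range(3):
            h = hmap[r + i - 1][c + j - 1]
            gx += SOBEL_X[i][j] * h
            gy += SOBEL_Y[i][j] * h
    return (gx**2 + gy**2) ** 0.5

def path_cost(hmap, path):
    """Mean gradient magnitude over the (interior) cells a path visits."""
    return sum(gradient_magnitude(hmap, r, c) for r, c in path) / len(path)

# Flat terrain on the left, a ramp on the right: the flat path ranks first.
hmap = [[0, 0, 0, 1, 2], [0, 0, 0, 1, 2], [0, 0, 0, 1, 2],
        [0, 0, 0, 1, 2], [0, 0, 0, 1, 2]]
flat = [(1, 1), (2, 1), (3, 1)]
ramp = [(1, 3), (2, 3), (3, 3)]
print(path_cost(hmap, flat) < path_cost(hmap, ramp))  # True
```

Ranking by a cheap filter like this does not replace ACE; it only reorders the queue so the expensive safety check is more likely to succeed on its first few tries, which is where the reported speedup comes from.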
Rover Relocalization for Mars Sample Return by Virtual Template Synthesis and Matching
We consider the problem of rover relocalization in the context of the
notional Mars Sample Return campaign. In this campaign, a rover (R1) needs to
be capable of autonomously navigating and localizing itself within an area of
approximately 50 x 50 m using reference images collected years earlier by
another rover (R0). We propose a visual localizer that exhibits robustness to
the relatively barren terrain that we expect to find in relevant areas, and to
large lighting and viewpoint differences between R0 and R1. The localizer
synthesizes partial renderings of a mesh built from reference R0 images and
matches those to R1 images. We evaluate our method on a dataset totaling 2160
images covering the range of expected environmental conditions (terrain,
lighting, approach angle). Experimental results show the effectiveness of our
approach. This work informs the Mars Sample Return campaign on the choice of a
site where Perseverance (R0) will place a set of sample tubes for future
retrieval by another rover (R1).
Comment: To appear in IEEE Robotics and Automation Letters (RA-L) and IEEE
International Conference on Robotics and Automation (ICRA 2021).
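Matching synthesized renderings against query images under large lighting differences is commonly done with a brightness-invariant score; zero-mean normalized cross-correlation (NCC) is one standard choice. The sketch below illustrates that invariance on tiny flat patches; the score choice and data are assumptions for illustration, not necessarily the paper's matcher:

```python
import math

# Hedged sketch of template-vs-image scoring with zero-mean NCC, which is
# invariant to the gain/offset brightness changes caused by different lighting.
def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size flat patches."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

template = [10, 20, 30, 40, 50, 60, 70, 80, 90]
brighter = [v * 2 + 15 for v in template]   # same scene, different lighting
shuffled = [90, 10, 50, 30, 70, 20, 80, 40, 60]
print(round(ncc(template, brighter), 3))    # 1.0: gain/offset invariant
print(ncc(template, shuffled) < 0.5)        # True: poor match scores low
```

Synthesizing the template from the R0 mesh under the query-time viewpoint, as the paper does, is what keeps scores like this meaningful despite years between the two rovers' visits.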